You must configure SAP HANA on SAP Cloud Platform before you can use it as a data source in Data Virtualization.
About this task
To connect to an SAP HANA instance on SAP Cloud Platform, Cloud Pak for Data requires an intermediary on-premises machine. This machine securely reroutes JDBC connections from Cloud Pak for Data to the SAP Cloud Platform instance.
SAP Cloud Platform provides two cloud environments: Cloud Foundry and Neo. These instructions apply to SAP HANA service instances on SAP Cloud Platform that run in either environment.
If your SAP HANA cloud service is located in an Amazon Web Services or Google Cloud Platform region and was provisioned after 4 June 2018, you can connect to the SAP HANA service instance directly by using the standard SAP HANA JDBC client. For details, see Adding SAP HANA.
Procedure
To add the SAP HANA on SAP Cloud Platform data source:
- Go to SAP Development Tools to install the correct version of the SDK onto the on-premises machine:
- For Cloud Foundry environments, see SAP Cloud SDK.
- For Neo environments, you can find SDK packages in the SAP Cloud Platform Neo Environment SDK section.
- To connect to an SAP HANA instance on SAP Cloud Platform from the on-premises machine, create a secure SSH tunnel to the instance:
- For Cloud Foundry environments, use the Cloud Foundry CLI from the SDK package that you installed in step 1. Log in to Cloud Foundry and create an SSH tunnel by running the following command:
cf ssh -L localhost:30015:hostname:port database_name -N
Replace the hostname and port variables with the hostname and port of the SAP HANA database. The command creates an encrypted tunnel from port 30015 on the on-premises machine to your SAP HANA instance on SAP Cloud Platform.
- For Neo environments, use the Neo SDK package that you installed in step 1 and run the following command to create an SSH tunnel:
neo open-db-tunnel -h hostname -a SAP_account_technical_name -u user_name -i SAP_ID_or_alias
The following example uses a trial SAP Cloud Platform account, with hanatrial.ondemand.com as the public endpoint and hxe as the database service ID.
neo.sh open-db-tunnel -h hanatrial.ondemand.com -a p2001966692trial -u name@ibm.com -i hxe
The example command creates a tunnel on localhost:30015.
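Before you continue, you can confirm that the tunnel from either environment is actually listening on the on-premises machine. The following Python sketch is one way to check; it assumes the tunnel was opened on localhost:30015 as in the examples above.

```python
import socket

def tunnel_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP listener accepts connections on host:port."""
    try:
        # create_connection completes the full TCP handshake, so success
        # means something (the SSH tunnel, here) is actually listening.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the local tunnel endpoint used in the commands above.
# tunnel_is_open("localhost", 30015)
```

If the check fails, re-run the cf ssh or neo open-db-tunnel command and verify that the local port was not changed.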
- Select a JDBC Driver JAR from the drop-down list. To upload a JDBC Driver JAR file:
- Enter SAP HANA in the Connection type field.
- Upload the ngdbc.jar file.
- For the driver class name, enter com.sap.db.jdbc.DriverSapDB.
- For the JDBC URL prefix, enter jdbc:sap.
- Click Upload.
- Connect the SAP HANA instance on SAP Cloud Platform to Cloud Pak for Data through the open tunnel that you created on the on-premises machine. On the Connections page:
- Select SAP HANA as the Connection type.
- Enter SAP HANA as the Connection name.
- Enter the JDBC URL for the tunnel that you created on the on-premises machine, and enter the username and password of the SAP HANA instance on SAP Cloud Platform.
The JDBC URL must have the following format:
jdbc:sap://hostname:port[/?<options>]
Replace the hostname and port variables with the hostname of the on-premises machine and the local tunnel port (30015 in this example).
- Click Create to add SAP HANA on SAP Cloud Platform as a data source that is connected to Data Virtualization.
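As a quick illustration of the URL format above, the following Python sketch assembles a jdbc:sap URL from a hostname, a port, and optional connection properties. The encrypt option shown in the usage lines is only an illustrative assumption, not a required setting.

```python
def build_sap_jdbc_url(hostname, port, options=None):
    """Build a JDBC URL of the form jdbc:sap://hostname:port[/?<options>]."""
    url = f"jdbc:sap://{hostname}:{port}"
    if options:
        # Join properties as key=value pairs after the /? separator.
        query = "&".join(f"{key}={value}" for key, value in options.items())
        url += f"/?{query}"
    return url

# Tunnel endpoint on the on-premises machine, as in the steps above.
print(build_sap_jdbc_url("localhost", 30015))
# → jdbc:sap://localhost:30015
print(build_sap_jdbc_url("localhost", 30015, {"encrypt": "true"}))
# → jdbc:sap://localhost:30015/?encrypt=true
```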
Results
You can now use your SAP HANA on SAP Cloud Platform database as a data source in Data Virtualization.