Apache Hive connection

To access your data in Apache Hive, you must create a connection asset for it.

Apache Hive is a data warehouse software project, built on top of Apache Hadoop, that provides data query and analysis.

Supported versions

Apache Hive 1.0.x, 1.1.x, 1.2.x, 2.0.x, 2.1.x, 3.0.x, and 3.1.x.

Prerequisite for Kerberos authentication

To use Kerberos authentication, the data source must be configured for Kerberos and the service that you plan to use the connection in must support Kerberos. For more information, see Enabling platform connections to use Kerberos authentication.

Create a connection to Apache Hive

To create the connection asset, you need the following connection details (the JDBC URL sketch after this list shows how they typically fit together):

  • Database name
  • Hostname or IP address
  • Port number
  • HTTP path (Optional): The path of the endpoint, such as gateway/default/hive, if the server is configured for the HTTP transport mode.
  • If required by the database server, the SSL certificate.
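
The following sketch shows how these details typically map to a HiveServer2 JDBC URL when you connect with the open-source Apache Hive JDBC driver (org.apache.hive.jdbc.HiveDriver on the classpath). The host names, ports, database, HTTP path, and truststore values are placeholders, not values that the platform supplies.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class HiveConnectExample {
        public static void main(String[] args) throws Exception {
            // Binary transport mode (the default): jdbc:hive2://<host>:<port>/<database>
            String binaryUrl = "jdbc:hive2://hive.example.com:10000/default";

            // HTTP transport mode: add the transportMode and httpPath properties,
            // for example when HiveServer2 is reached through a gateway endpoint.
            String httpUrl = "jdbc:hive2://hive.example.com:10001/default"
                    + ";transportMode=http;httpPath=cliservice";

            // SSL, if the database server requires it: point to a truststore
            // that contains the server certificate.
            String sslUrl = "jdbc:hive2://hive.example.com:10000/default"
                    + ";ssl=true;sslTrustStore=/path/to/truststore.jks"
                    + ";trustStorePassword=changeit";

            // Username and password authentication with the basic URL.
            try (Connection conn =
                     DriverManager.getConnection(binaryUrl, "user", "password")) {
                System.out.println("Connected: " + !conn.isClosed());
            }
        }
    }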

Authentication method

You can choose Kerberos credentials or Username and password.
For Kerberos credentials, you must complete the prerequisite for Kerberos authentication, and you need the following connection details (see the sketch after this list):

  • Service principal name (SPN) that is configured for the data source
  • User principal name to connect to the Kerberized data source
  • The keytab file for the user principal name that is used to authenticate to the Key Distribution Center (KDC)
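
As a rough sketch with the Apache Hive JDBC driver, these details end up in two places: the service principal name (SPN) is passed in the JDBC URL, while the user principal and keytab are used to obtain a Kerberos ticket from the KDC before the connection is opened. The host, realm, and keytab path below are placeholder values.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class HiveKerberosExample {
        public static void main(String[] args) throws Exception {
            // The SPN that is configured for HiveServer2 goes into the URL.
            String url = "jdbc:hive2://hive.example.com:10000/default"
                    + ";principal=hive/hive.example.com@EXAMPLE.COM";

            // The user principal authenticates to the KDC outside the URL,
            // typically by obtaining a ticket from the keytab first, e.g.:
            //   kinit -kt /path/to/user.keytab user@EXAMPLE.COM
            try (Connection conn = DriverManager.getConnection(url)) {
                System.out.println("Kerberos connection open: " + !conn.isClosed());
            }
        }
    }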

ZooKeeper discovery (optional)

Select Use ZooKeeper discovery to ensure continued access to the connection in case the Apache Hive server that you log in to fails.

Prerequisites for ZooKeeper discovery:

  • ZooKeeper must be configured in your Hadoop cluster.
  • The Hive service in the Hadoop cluster must be configured for ZooKeeper, along with the ZooKeeper namespace.
  • Alternative servers must be available for failover.

Enter the ZooKeeper namespace and a comma-separated list of alternative servers in this format:
hostname1:port-number1,hostname2:port-number2,hostname3:port-number3
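
As an illustration, the same information appears in a Hive JDBC URL when ZooKeeper service discovery is used: the alternative servers become the ZooKeeper ensemble list, and the namespace selects the HiveServer2 instances. The host names, ports, and namespace below are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class HiveZooKeeperExample {
        public static void main(String[] args) throws Exception {
            // The comma-separated host:port list is the ZooKeeper ensemble;
            // an available HiveServer2 instance is looked up under the namespace.
            String url = "jdbc:hive2://zk1.example.com:2181,zk2.example.com:2181,"
                    + "zk3.example.com:2181/default"
                    + ";serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2";

            try (Connection conn =
                     DriverManager.getConnection(url, "user", "password")) {
                System.out.println("Connected through ZooKeeper discovery");
            }
        }
    }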

For Credentials and Certificates, you can use secrets if a vault is configured for the platform and the service supports vaults. For information, see Using secrets from vaults in connections.

Choose the method for creating a connection based on where you are in the platform

In a project
Click Assets > New asset > Data access tools > Connection. See Adding a connection to a project.
In a catalog
Click Add to catalog > Connection. See Adding a connection asset to a catalog.
In a deployment space
Click Add to space > Connection. See Adding connections to a deployment space.
In the Platform assets catalog
Click New connection. For more information, see Adding platform connections.

Next step: Add data assets from the connection

Where you can use this connection

You can use the Apache Hive connection in the following workspaces and tools:

Projects

Catalogs

  • Platform assets catalog
  • Other catalogs (Watson Knowledge Catalog)

Watson Query service
You can connect to this data source from Watson Query.

Federal Information Processing Standards (FIPS) compliance

The Apache Hive connection is compliant with FIPS except for:

  • A connection that requires SSL
  • Kerberos authentication

Apache Hive setup

Apache Hive installation and configuration

Restriction

For all services except DataStage, you can use this connection only for source data. You cannot write data or export data with this connection. In DataStage, you can use this connection as a target if you select Use DataStage properties in the connector's properties.

Running SQL statements

To ensure that your SQL statements run correctly, refer to SQL Operations in the Apache Hive documentation for the correct syntax.
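
For illustration only, the following sketch runs one HiveQL query through a JDBC connection; the URL, credentials, and the sales table are placeholder values.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQueryExample {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:hive2://hive.example.com:10000/default";
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT region, SUM(amount) FROM sales GROUP BY region")) {
                // Print one row per region with its total amount.
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "\t" + rs.getDouble(2));
                }
            }
        }
    }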

Learn more

Apache Hive documentation

Parent topic: Supported connections